
    Use of morphological and kinematic descriptors in the automatic identification of laboratory-animal behaviors

    Doctoral thesis - Universidade Federal de Santa Catarina, Centro Tecnológico, Programa de Pós-Graduação em Engenharia Elétrica, Florianópolis, 2011. Animal behavior is a biological signal little explored by the signal-processing and computational-intelligence disciplines within Biomedical Engineering. The neurosciences use recordings and quantifications of animal behavior to examine the neural mechanisms of behavioral control. These recordings are usually made by a human observer and are subject to interpretation biases (e.g., fatigue, experience, and ambiguities between categories). This work examined the use of image descriptors (morphological, such as area and length; and kinematic, such as distance traveled and angular variation) as input parameters to artificial neural networks (ANNs) for the automatic identification of laboratory-animal behaviors. The descriptors were extracted from behaviors of Wistar rats in open-field arenas (locomotion: LOC; immobility: IMO; grooming: LIC; vertical exploration: EXP), treated with caffeine (2 or 6 mg/kg) or with its vehicle (saline), using ethography and tracking software developed during this thesis (ETHOWATCHER®). Multilayer perceptron (MLP) networks were employed and evaluated with multiple diagnostic performance indices (AUC, Kappa). The descriptors were first assessed for their relevance in differentiating between the behaviors using the Kruskal-Wallis statistical test. In vehicle-treated animals, the MLPs identified 97.3 ± 2.0% of IMO cases (AUC, mean ± standard deviation); 95.6 ± 8.0% of LOC; 94.6 ± 3.0% of EXP; and 83.6 ± 16.0% of LIC. In caffeine-treated animals, the results were 85.2 ± 1.8% for IMO; 83.5 ± 0.9% for LOC; 67.0 ± 2.0% for EXP; and 78.0 ± 10.0% for LIC.
The results indicate that MLPs using the kinematic and morphological descriptors identify the investigated behaviors with variable success. The statistically significant differences between the performance of classifiers using relevant parameters and that of classifiers using irrelevant ones validated the use of the Kruskal-Wallis test for selecting descriptors suited to behavioral identification. The reduced MLP performance on behaviors of animals treated with a sub-effective dose of caffeine (0.2 mg/kg) suggests that the procedures used here can detect variations in the morphological and kinematic patterns of behaviors (Mann-Whitney, p < 0.05) that are not detectable by the usual procedures of behavioral analysis. Although reduced, the MLP performance was superior to that measured for observers new to the behavioral recording of a treatment-naïve rat (Kappa: 35.48%), demonstrating the feasibility of using these ANNs to assess changes in behavioral patterns.
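    The descriptor-screening step described above (Kruskal-Wallis relevance testing before MLP training) can be sketched as follows; the behavior groups and descriptor values below are invented for illustration, and the statistic is computed without tie correction:

```python
from itertools import chain

def kruskal_h(*groups):
    """Kruskal-Wallis H statistic over several sample groups.
    Simplified sketch: assumes all values are distinct (no tie correction)."""
    data = sorted(chain.from_iterable(groups))
    rank = {v: i + 1 for i, v in enumerate(data)}  # 1-based ranks
    n = len(data)
    # H = 12 / (n*(n+1)) * sum(R_i^2 / n_i) - 3*(n+1)
    s = sum(sum(rank[v] for v in g) ** 2 / len(g) for g in groups)
    return 12.0 / (n * (n + 1)) * s - 3 * (n + 1)

# Hypothetical "distance traveled" samples per behavior category:
h = kruskal_h([10, 11, 12],   # LOC
              [1, 2, 3],      # IMO
              [20, 21, 22])   # EXP
# Compare h against the chi-square critical value for df = k - 1
# (5.991 for 3 groups at alpha = 0.05) to decide relevance.
```

A descriptor whose H statistic exceeds the critical value would be kept as an MLP input; the thesis reports that this selection was validated by the performance gap between relevant and irrelevant parameters.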

    A Multi-Sensor Approach for Activity Recognition in Older Patients

    In press. Existing surveillance systems for the activity analysis of older people focus on video and sensor analysis (e.g., accelerometers, pressure, infrared) applied to frailty assessment, fall detection, and the automatic identification of self-maintenance activities (e.g., dressing, self-feeding) at home. This paper proposes a multi-sensor surveillance system (accelerometers and video camera) for the automatic detection of instrumental activities of daily living (IADL, e.g., preparing coffee, making a phone call) in a lab-based clinical protocol. IADLs are more complex activities than self-maintenance, and a decline in their performance has been highlighted as an indicator of early symptoms of dementia. Ambient video analysis is used to describe the activity of older people in the scene, and a wearable accelerometer device is used to complement the visual information in body-posture identification (e.g., standing, sitting). A generic constraint-based ontology language is used to model IADL events from sensor readings and semantic information about the scene (e.g., presence in goal-oriented zones of the environment, temporal relationships between events, estimated postures). The proposed surveillance system was tested with 9 participants (healthy: 4, MCI: 5) in an observation room equipped with home appliances at the Memory Center of Nice Hospital. Experiments were recorded using a 2D video camera (8 fps) and an accelerometer device (MotionPod®). The multi-sensor approach presents an average sensitivity of 93.51% and an average precision of 63.61%, while the vision-based approach has a sensitivity of 77.23% and a precision of 57.65%. The results show an improvement of the multi-sensor approach over the vision-based one at IADL detection. Future work will focus on using the system to evaluate the differences between the activity profiles of healthy participants and early- to mild-stage Alzheimer's patients.
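    The sensitivity and precision figures reported here follow the standard definitions: detected true events over all true events, and correct detections over all detections. A minimal sketch, with hypothetical event counts:

```python
def sensitivity(tp, fn):
    # fraction of true events that were detected (recall)
    return tp / (tp + fn)

def precision(tp, fp):
    # fraction of detections that correspond to true events
    return tp / (tp + fp)

# Hypothetical counts for one IADL class (not the paper's data):
sens = sensitivity(tp=93, fn=7)   # 0.93
prec = precision(tp=93, fp=53)    # ~0.64
```

A high sensitivity with lower precision, as in the multi-sensor results above, means few events are missed but a noticeable share of detections are false alarms.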

    Semi-supervised understanding of complex activities from temporal concepts

    Methods for action recognition have evolved considerably over the past years and can now automatically learn and recognize short-term actions with satisfactory accuracy. Nonetheless, the recognition of complex activities (compositions of actions and scene objects) is still an open problem due to the complex temporal and composite structure of this category of events. Existing methods focus either on simple activities or oversimplify the modeling of complex activities by targeting only whole-part relations between their sub-parts (e.g., actions). In this paper, we propose a semi-supervised approach that learns complex activities from the temporal patterns of concept compositions (e.g., "slicing-tomato" before "pouring into-pan"). We demonstrate that our method outperforms prior work in the task of automatic modeling and recognition of complex activities learned from the interaction of 218 distinct concepts.

    BEHAVE - Behavioral analysis of visual events for assisted living scenarios

    This paper proposes BEHAVE, a person-centered pipeline for probabilistic event recognition. The pipeline first detects the set of people in a video frame, then searches for correspondences between people in the current and previous frames (i.e., people tracking). Finally, event recognition is carried out for each person using probabilistic logic models (PLMs, ProbLog2 language). PLMs represent interactions among people, home appliances, and semantic regions. They also make it possible to assess the probability of an event given noisy observations of the real world. BEHAVE was evaluated on the task of online (non-clipped videos) and open-set event recognition (i.e., target events plus a "none" class) on video recordings of seniors carrying out daily tasks. Results have shown that BEHAVE improves event recognition accuracy by handling missed and partially satisfied logic models. Future work will investigate how to extend PLMs to represent temporal relations among events.
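    BEHAVE expresses its models in the ProbLog2 language, which is not reproduced here. As a rough plain-Python stand-in for the underlying idea, the probability of an event that requires several noisy sub-observations can be scored by treating the detector confidences as independent (an independence assumption made purely for illustration; the event name and confidences below are hypothetical):

```python
def event_probability(obs_probs):
    """Probability that an event holds when it requires every listed
    observation, with detector confidences treated as independent."""
    p = 1.0
    for prob in obs_probs:
        p *= prob
    return p

# Hypothetical "prepare tea" model: person detected in the kitchen
# zone with confidence 0.9, kettle interaction with confidence 0.8.
p_prepare_tea = event_probability([0.9, 0.8])  # 0.72
```

This also illustrates how a partially satisfied model can still yield a usable (lower) probability instead of an all-or-nothing decision, which is the behavior the abstract credits for the accuracy improvement.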

    Fusion Framework for Video Event Recognition

    This paper presents a multi-sensor fusion framework for video activity recognition based on statistical reasoning and Dempster-Shafer (D-S) evidence theory. Precisely, the framework combines the computation of event uncertainty against a trained database with a fusion method based on the conflict management of evidences. The framework aims to build a multi-sensor fusion architecture for event recognition by combining sensors, dealing with conflicting recognitions, and improving their performance. Within the hierarchy of complex events, the primitive state is chosen as the target event of the framework. An RGB camera and an RGB-D camera are used to recognize a person's basic activities in the scene. The main convenience of the proposed framework is that, first, it makes it easy to add more possible events into the system, with a complete structure for handling uncertainty; and second, the inference of Dempster-Shafer theory resembles human perception and fits uncertainty and conflict management with incomplete information. Cross-validation on real-world data (10 persons) was carried out using the proposed framework, and the evaluation shows promising results: the fusion approach has an average sensitivity of 93.31% and an average precision of 86.7%. These results are better than those obtained when only one camera is used, encouraging further research on combining more sensors with more events, as well as optimizing the framework's parameters for further improvement.
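    Dempster's rule of combination, on which this kind of fusion rests, can be sketched for the simple case of mass functions whose focal elements are all singleton hypotheses (the actual framework may use richer frames of discernment; the posture labels and masses below are hypothetical):

```python
def dempster_combine(m1, m2):
    """Dempster's rule for two mass functions over singleton hypotheses:
    agreeing masses are multiplied, disagreeing mass becomes conflict,
    and the result is renormalized by (1 - conflict)."""
    combined = {}
    conflict = 0.0
    for h1, v1 in m1.items():
        for h2, v2 in m2.items():
            if h1 == h2:
                combined[h1] = combined.get(h1, 0.0) + v1 * v2
            else:
                conflict += v1 * v2
    if conflict >= 1.0:
        raise ValueError("total conflict: sources cannot be combined")
    return {h: v / (1.0 - conflict) for h, v in combined.items()}

# Hypothetical evidence from the two cameras about a person's posture:
rgb   = {"standing": 0.7, "sitting": 0.3}
rgb_d = {"standing": 0.6, "sitting": 0.4}
fused = dempster_combine(rgb, rgb_d)  # agreement reinforces "standing"
```

Because the conflicting mass is discarded and the remainder renormalized, two sources that lean the same way reinforce each other, which is what lets the fused result outperform either camera alone.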

    Event Recognition System for Older People Monitoring Using an RGB-D Camera

    In many domains, such as health monitoring, the semantic information provided by automatic monitoring systems has become essential. These systems should be as robust, as easy to deploy, and as affordable as possible. This paper presents a monitoring system for mid- to long-term event recognition based on RGB-D (Red Green Blue + Depth) standard algorithms and on additional algorithms designed to address a real-world application. Using a hierarchical model-based approach, the robustness of this system is evaluated on the recognition of physical tasks (e.g., balance test) undertaken by older people (N = 30) during a clinical protocol devoted to the study of dementia. The performance of the system is demonstrated at recognizing, first, human postures and, second, complex events based on posture and 3D contextual information about the scene.

    Automatic prediction of autonomy in activities of daily living of older adults

    Short paper. Purpose: the world population is aging, and the number of seniors in need of care is expected to surpass the number of young people capable of providing it. It is therefore essential to develop instruments to support doctors in diagnosing and monitoring the health status of seniors [1-3]. Methods for assessing the autonomy and functional abilities of seniors currently rely on rating scales [4]. The subjective character of these scales and their dependence on human observations tend to jeopardize the timely diagnosis of deteriorations in cognitive health. We propose a probabilistic model (PM) to objectively classify a person's performance in executive functions into three classes of cognitive status: Alzheimer's disease (AD), mild cognitive impairment (MCI), and healthy control (HC); and into different levels of autonomy: good, intermediate, or poor. Material & Methods: the proposed PM relies on a Naïve Bayes model for classification and takes as input automatically extracted parameters about a person's performance in activities of daily living (event monitoring system, EMS, Fig. 1). To evaluate our approach, participants aged 65 or older were recruited within the Dem@care project protocol at the Memory Center of the Nice university hospital: n = 49; 12 AD (5 male), 23 MCI (13 male), and 14 HC (5 male). They were asked to carry out a set of instrumental activities of daily living (IADL, e.g., medication preparation, talking on the telephone) in an observation room equipped with everyday objects. Results & Discussion: the EMS recognized the targeted IADLs with high precision (e.g., 'prepare medication': 93%, 'talk on the telephone': 89%). The proposed PM achieved an average classification accuracy of 73.5% for cognitive-status classes and of 83.7% for autonomy classes. Moreover, the proposed PM displayed a higher accuracy when fed with EMS data than with human annotations of daily activities.
This finding is explained by the stability of EMS recognition, which makes it possible to relate subtle deviations from activity norms to characteristic traits of the target classes. Conclusion: the proposed framework provides clinicians with diagnostically relevant information to support autonomy assessment in ecological scenarios by decreasing observer biases and facilitating a more timely diagnosis of frailty patterns in seniors. Further work will extend the proposed framework to other clinical sites and seek novel cues about autonomy decline in seniors.
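    A Gaussian Naïve Bayes classifier of the kind the PM relies on can be sketched in a few lines; the feature values and class labels below are synthetic stand-ins, not Dem@care data:

```python
import math
from collections import defaultdict

def fit(samples):
    """samples: list of (feature_vector, label). Returns per-class
    priors and per-feature Gaussian (mean, variance) estimates."""
    by_label = defaultdict(list)
    for x, y in samples:
        by_label[y].append(x)
    model, total = {}, len(samples)
    for y, rows in by_label.items():
        n, stats = len(rows), []
        for j in range(len(rows[0])):
            col = [r[j] for r in rows]
            mean = sum(col) / n
            var = sum((v - mean) ** 2 for v in col) / n or 1e-9
            stats.append((mean, var))
        model[y] = (n / total, stats)
    return model

def predict(model, x):
    """Pick the class with the highest log-posterior under the
    naive (feature-independence) assumption."""
    best, best_lp = None, float("-inf")
    for y, (prior, stats) in model.items():
        lp = math.log(prior)
        for v, (mean, var) in zip(x, stats):
            lp += -0.5 * math.log(2 * math.pi * var) - (v - mean) ** 2 / (2 * var)
        if lp > best_lp:
            best, best_lp = y, lp
    return best

# Synthetic one-feature example (e.g., an activity-performance score):
data = [([10.0], "HC"), ([11.0], "HC"), ([1.0], "AD"), ([2.0], "AD")]
nb = fit(data)
```

In the setting described above, the feature vector would hold EMS-extracted performance parameters and the labels would be the cognitive-status or autonomy classes.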

    Combining Multiple Sensors for Event Recognition of Older People

    MIRRH, held in conjunction with ACM MM 2013. We herein present a hierarchical model-based framework for event recognition using multiple sensors. Event models combine a priori knowledge of the scene (3D geometric and semantic information, such as contextual zones and equipment) with the moving objects (e.g., a person) detected by a monitoring system. The event models follow a generic ontology based on natural language, which allows domain experts to adapt them easily. The framework's novelty relies on combining multiple sensors (heterogeneous and homogeneous) at the decision level, explicitly or implicitly, by handling their conflicts using a probabilistic approach. The implicit handling of event conflicts works by computing the event reliabilities for each sensor and then combining them using Dempster-Shafer theory. The multi-sensor system is evaluated using multi-modal recordings of instrumental activities of daily living (e.g., watching TV, writing a check, preparing tea, organizing the weekly intake of prescribed medication) of participants in a clinical study of Alzheimer's disease. The evaluation presents the preliminary results of this approach in two cases: the combination of events from heterogeneous sensors (an RGB camera and a wearable inertial sensor), and the combination of conflicting events from video cameras with partially overlapping fields of view (an RGB and an RGB-D camera). The results show that the framework improves the event recognition rate in both cases.

    Evaluation of a Monitoring System for Event Recognition of Older People

    Population aging has been motivating academic research and industry to develop technologies for improving older people's quality of life, supporting medical diagnosis, and assisting in frailty cases. Most available research prototypes for monitoring older people focus on fall detection or gait analysis and rely on wearable, environmental, or video sensors. We present an evaluation of a research prototype of a video monitoring system for event recognition of older people. The prototype's accuracy is evaluated on the recognition of physical tasks (e.g., the Up and Go test) and instrumental activities of daily living (e.g., watching TV, writing a check) of participants in a clinical protocol for the study of Alzheimer's disease (29 participants). The prototype uses a 2D RGB camera as input, and its performance is compared to the use of an RGB-D camera. The experimental results show that the proposed approach performs competitively with the RGB-D camera, even outperforming it on event recognition precision. The use of a 2D camera is advantageous, as its field of view can be much larger and cover an entire room where at least a couple of RGB-D cameras would otherwise be necessary.

    Alzheimer's patient activity assessment using different sensors

    Purpose: the older population is expected to grow dramatically over the next 20 years (including Alzheimer's patients), while the number of people able to provide care will decrease. We present the development of medical, information, and communication technologies to support the diagnosis and evaluation of dementia progression in early-stage Alzheimer's disease (AD) patients. Method: we compared video- and accelerometer-based activity assessment for estimating the performance of older people in instrumental activities of daily living (IADL) and physical tests, within the clinical protocol developed by the Memory Center of the Nice Hospital and the Department of Neurology at National Cheng Kung University Hospital, Taiwan. This clinical protocol defines a set of IADLs (e.g., preparing coffee, watching TV) that could provide objective information about dementia symptoms and be realistically carried out in the two sites' observation rooms. Previous works studied accelerometer-based activity assessment for detecting changes in the gait patterns of older people caused by dementia progression, or video-based event detection for personal self-care activities (ADLs) [1, 2, 3], but none has used both sensors for IADL analysis. The proposed system uses a constraint-based ontology to model and detect events based on readings from different sensors (e.g., 2D video stream data is converted to 3D geometric information that is combined with a priori semantic information, such as defined spatial zones or posture estimations given by an accelerometer). The ontology language is declarative and intuitive (as it uses natural terminology), allowing medical experts to define and modify the IADL models. The proposed system was tested with 44 participants (healthy = 21, AD = 23).
A stride-detection algorithm was developed by the Taiwanese team for the automatic acquisition of patients' gait parameters (e.g., stride length, stride frequency) using a tri-axial accelerometer embedded in a wearable device. It was tested with 33 participants (healthy = 17, Alzheimer's = 16) during a 40-meter walking test. Results & Discussion: the proposed system detected the full set of activities of the first part of our clinical protocol (e.g., repeated transfer test, walking test) with a true positive rate of 96.9% to 100%. Extracted gait parameters and automatically detected IADLs will be further analyzed to evaluate differences between Alzheimer's patients at mild to moderate stages and healthy control participants, and to monitor patients' motor and cognitive abilities.
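    The stride-detection algorithm itself is not detailed in the abstract; as a deliberately simplified stand-in for the idea, strides can be counted as upward threshold crossings of the acceleration-magnitude signal (the threshold and the sample signal below are hypothetical):

```python
def count_strides(accel_magnitude, threshold):
    """Count strides as upward crossings of a threshold on the
    acceleration-magnitude signal. A simplified stand-in for the
    stride-detection algorithm mentioned above; real detectors
    typically add filtering and timing constraints."""
    strides = 0
    above = False
    for v in accel_magnitude:
        if v > threshold and not above:
            strides += 1        # rising edge: one stride impact
            above = True
        elif v <= threshold:
            above = False       # reset once the signal drops back
    return strides

# Hypothetical magnitude samples with three impact peaks:
n = count_strides([0, 1, 3, 1, 0, 2, 4, 1, 0, 3, 0], threshold=2)
```

Given the stride count over a timed walk of known length, stride frequency (count / duration) and average stride length (distance / count) follow directly, which are the gait parameters the abstract names.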